Starting May 8, every Instagram DM becomes readable by the same company that sells ads against everything else you do on the platform:
Meta is quietly dismantling one of its few genuine privacy commitments. Starting May 8, end-to-end encryption for Instagram direct messages disappears, taking with it the one technical guarantee that kept those conversations private from Meta itself.
"If you have chats that are impacted by this change, you will see instructions on how you can download any media or messages you may want to keep," the company said in a help document, framing the loss of message privacy as a data export problem. Collect your things, the walls are coming down.
The feature being removed was never universal anyway. End-to-end encryption for Instagram DMs had been available only in certain regions, not enabled by default, since Meta began testing it in 2021 as part of what CEO Mark Zuckerberg called his "privacy-focused vision for social networking."
[...] The timing is revealing. TikTok told the BBC last week that it has no plans to bring end-to-end encryption to its DMs, arguing that privacy makes users less safe. Meta is now arriving at the same destination from a different direction.
https://www.righto.com/2019/11/ibm-sonic-delay-lines-and-history-of.html
What explains the popularity of terminals with 80×24 and 80×25 displays? A recent blog post "80x25" motivated me to investigate this. The source of 80-column lines is clearly punch cards, as commonly claimed. But why 24 or 25 lines? There are many theories, but I found a simple answer: IBM, in particular its dominance of the terminal market. In 1971, IBM introduced a terminal with an 80×24 display (the 3270) and it soon became the best-selling terminal, forcing competing terminals to match its 80×24 size. The display for the IBM PC added one more line to its screen, making the 80×25 size standard in the PC world. The impact of these systems remains decades later: 80-character lines are still a standard, along with both 80×24 and 80×25 terminal windows.
In this blog post, I'll discuss this history in detail, including some other systems that played key roles. The CRT terminal market essentially started with the IBM 2260 Display Station in 1965, built from curious technologies such as sonic delay lines. This led to the popular IBM 3270 display and then widespread, inexpensive terminals such as the DEC VT100. In 1981, IBM released a microcomputer called the DataMaster. While the DataMaster is mostly forgotten, it strongly influenced the IBM PC, including the display. This post also studies reports on the terminal market from the 1970s and 1980s; these make it clear that market forces, not technological forces, led to the popularity of various display sizes.
Frame-dragging may explain an odd pattern seen in the brightest supernovae:
Some of the most extreme explosions in the universe are Type I superluminous supernovae. "They are one of the brightest explosions in the Universe," says Joseph Farah, an astrophysicist at the University of California, Santa Barbara. For years, astrophysicists tried to understand what exactly makes superluminous supernovae so absurdly powerful. Now it seems like we may finally have some answers.
Farah and his colleagues have found that these events are most likely powered by magnetars, rapidly spinning neutron stars that warp the very space and time around them.
Magnetars have been a leading candidate for the engine behind superluminous supernovae. The theory says these insanely magnetized stars are born from the collapsing core of the original progenitor star and emit energy via magnetic dipole radiation. "This core is roughly a one solar mass object that gets crushed down to the size of a city," Farah explains. As its spin slows down, a magnetar bleeds its rotational energy into the expanding material of the dead star, lighting it up.
The problem was that this theory did not quite explain observations. In a standard magnetar model, the light curve of the supernova should rise rapidly and then fade away evenly as the neutron star loses its rotational energy. "This way the light curve, in the prediction of this model, just goes up and then down quite smoothly," Farah says. But when astronomers observe superluminous supernovae, they almost never see this smooth fade. Instead, they see bumps, wiggles, and strange modulations. The light curve flickers over months.
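The smooth rise-and-fall the standard model predicts follows from magnetic dipole braking: the engine's power declines as L(t) = L0 / (1 + t/t_sd)². A minimal sketch of that behavior (the function name and all numbers here are illustrative assumptions, not values from the paper):

```python
# Toy illustration of the smooth spin-down power of a dipole-braking
# magnetar engine. Numbers are arbitrary, not fitted to any real supernova.

def spindown_luminosity(t_days, l0=1e45, t_sd_days=10.0):
    """Engine power (erg/s) at time t after spin-up: l0 / (1 + t/t_sd)**2.

    l0        -- initial spin-down luminosity (illustrative)
    t_sd_days -- spin-down timescale in days (illustrative)
    """
    return l0 / (1.0 + t_days / t_sd_days) ** 2

# The engine power only ever declines, and does so smoothly:
# no bumps, wiggles, or flickering anywhere in the curve.
powers = [spindown_luminosity(t) for t in range(0, 200, 10)]
assert all(a > b for a, b in zip(powers, powers[1:]))
```

This is exactly why the observed bumps were puzzling: a bare dipole engine has no mechanism in it that could modulate the output over months.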
For a while, scientists tried to patch the magnetar engine theory to fit observations. Maybe the expanding debris was slamming into irregular shells of material shed by the star before it died. Or perhaps the magnetar engine was spitting out random, violent flares. But these explanations required highly specific, fine-tuned parameters to match what we were seeing through our telescopes.
The solution to the strange flickering problem came when the Liverpool Gravitational Wave Optical Transient Observer collaboration detected an object designated SN 2024afav on December 12, 2024. Initially, the object looked like a standard superluminous supernova. "It was as bright and it had bumps in the light curve like many other objects of this kind," Farah says. But as the telescopes kept watching, it started doing something unprecedented: It started to chirp.
[...] The flickering in superluminous supernovae, Farah hypothesized, was caused by the extreme gravity of a newborn magnetar dragging the surrounding spacetime along with its spin.
To understand Farah's Lense-Thirring solution, imagine a bowling ball spinning in a vat of molasses. As the ball rotates, friction drags the sticky fluid along, creating a swirling vortex. According to Einstein's General Relativity, mass and energy can warp the fabric of spacetime, so if a sufficiently large mass is spinning rapidly, it drags the space-time along in a manner similar to the molasses. Around Earth, this effect is minuscule. But around a newborn magnetar, which is far more massive and spinning hundreds of times a second, spacetime is whipped into a violent, twisting frenzy.
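The strength of the dragging falls off steeply with distance. In the weak-field limit, a body with spin angular momentum J twists spacetime at radius r at the rate Ω = 2GJ/(c²r³). A rough comparison (the constants are standard; the magnetar's angular momentum and the 100 km radius are illustrative guesses on my part, not figures from the study):

```python
# Weak-field Lense-Thirring precession rate: Omega = 2*G*J / (c^2 * r^3).
G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8     # speed of light, m/s

def lense_thirring_rate(j, r):
    """Frame-dragging precession rate (rad/s) at radius r (m)
    around a slowly rotating body with angular momentum j (kg m^2/s)."""
    return 2.0 * G * j / (C ** 2 * r ** 3)

# Illustrative comparison: Earth vs. a young magnetar (rough numbers).
j_earth = 7.1e33        # Earth's spin angular momentum, kg m^2/s
j_magnetar = 1e41       # ~1.4 solar masses, ~12 km radius, ~100 Hz spin

rate_earth = lense_thirring_rate(j_earth, 6.4e6)      # at Earth's surface
rate_magnetar = lense_thirring_rate(j_magnetar, 1e5)  # 100 km from the star

# The magnetar twists nearby spacetime more than ten orders of
# magnitude faster than Earth does, thanks to the huge j and tiny r.
assert rate_magnetar > 1e10 * rate_earth
```

The r³ in the denominator is why the effect is negligible for satellites around Earth yet dominant for a disk orbiting tens of kilometers from a neutron star.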
When the progenitor star exploded to create SN 2024afav, it didn't eject all of its material perfectly. Some of the stellar guts failed to escape and fell back toward the newborn magnetar, forming a small accretion disk around it. Crucially, this disk was misaligned, tilted relative to the rotational axis of the magnetar. Because the disk was tilted in this aggressively twisted spacetime, the Lense-Thirring effect forced the entire disk to wobble, or precess, around the magnetar's spin axis like a top that was spinning ever more slowly.
As this misaligned disk wobbled, it acted like a giant cosmic lampshade: it periodically blocked, reflected, or redirected the intense radiation and jets spewing from the central magnetar. The high-energy photons emitted by the magnetar had to fight their way through the expanding supernova ejecta, getting reprocessed into optical light and diffusing outward over a span of about 15 days. Observed through our telescopes on Earth, this wobbling disk created a rhythmic fluctuation in the superluminous supernova's brightness.
[...] This model, though, still has many unanswered questions. "How the accretion disk forms, how it blocks or modulates the light from the magnetar, how that light then gets to the ejecta, and finally how it gets to the observer," Farah listed. "Basically every step along the way we made the best assumptions we could." For each of these steps, he admits, there were at least five different ways it could happen, and the team just went with their best guess of what was going on.
Journal Reference: Farah, J.R., Prust, L.J., Howell, D.A. et al. Lense–Thirring precessing magnetar engine drives a superluminous supernova. Nature 651, 321–325 (2026). https://doi.org/10.1038/s41586-026-10151-0
Musk can't convince judge public doesn't care about where AI training data comes from:
Elon Musk's xAI has lost its bid for a preliminary injunction that would have temporarily blocked California from enforcing a law that requires AI firms to publicly share information about their training data.
xAI had tried to argue that California's Assembly Bill 2013 (AB 2013) forced AI firms to disclose carefully guarded trade secrets.
The law requires AI developers whose models are accessible in the state to clearly explain which dataset sources were used to train their models, when the data was collected, whether collection is ongoing, and whether the datasets include any data protected by copyrights, trademarks, or patents. Disclosures must also clarify whether companies licensed or purchased the training data and whether it included any personal information, and they help consumers assess how much synthetic data was used to train a model, which could serve as a measure of quality.
However, xAI argued, this information is precisely what makes the company valuable, with its intensive data sourcing supposedly setting it apart from its biggest rivals. Allowing enforcement could be "economically devastating," effectively reducing "the value of xAI's trade secrets to zero," its complaint said. Further, xAI insisted, these disclosures "cannot possibly be helpful to consumers" while supposedly posing a real risk of gutting the entire AI industry.
Specifically, xAI argued that its dataset sources, dataset sizes, and cleaning methods were all trade secrets.
"If competitors could see the sources of all of xAI's datasets or even the size of its datasets, competitors could evaluate both what data xAI has and how much they lack," xAI argued. In one hypothetical, xAI speculated that "if OpenAI (another leading AI company) were to discover that xAI was using an important dataset to train its models that OpenAI was not, OpenAI would almost certainly acquire that dataset to train its own model, and vice versa."
However, in an order issued on Wednesday, US District Judge Jesus Bernal said that xAI failed to show that California's law, which took effect in January, required the company to reveal any trade secrets.
xAI's biggest problem was being too vague about the harms it faced if the law were not halted, the judge said. Instead of explaining why the disclosures could directly harm xAI, the company offered only "a variety of general allegations about the importance of datasets in developing AI models and why they are kept secret," Bernal wrote, describing xAI as trading in "frequent abstractions and hypotheticals."
He denied xAI's motion for a preliminary injunction while supporting the government's interest in helping the public assess how the latest AI models were trained.
[...] Perhaps most frustrating for xAI as it continues to fight to block the law, Bernal also disputed that the public had no interest in the training data disclosures.
"It strains credulity to essentially suggest that no consumer is capable of making a useful evaluation of Plaintiff's AI models by reviewing information about the datasets used to train them and that therefore there is no substantial government interest advanced by this disclosure statute," Bernal wrote.
He noted that the law simply requires companies to alert the public about information that can feasibly be used to weigh whether they want to use one model over another.
Nothing about the required disclosures is inherently political, the judge suggested, although some consumers might select or avoid certain models with perceived political biases. As an example, Bernal opined that consumers may want to know "if certain medical data or scientific information was used to train a model" to decide if they can trust the model "to be sufficiently comprehensively trained and reliable for the consumer's purposes."
"In the marketplace of AI models, AB 2013 requires AI model developers to provide information about training datasets, thereby giving the public information necessary to determine whether they will use—or rely on information produced by—Plaintiff's model relative to the other options on the market," Bernal wrote.
Chinese Loongson 3C6000 CPUs now have heat spreaders with words in Cyrillic?
The specifications of Tramplin's 16-core Irtysh C616 (2.20 GHz, 32MB L3, quad-channel DDR4-3200 memory, 844.8 GFLOPS, 100W – 120W TDP) and 32-core Irtysh C632 (2.10 GHz, 64MB L3, octa-channel DDR4-3200 memory, 1612.8 GFLOPS, 180W – 200W TDP) processors are identical to those of Loongson's 16-core LS3C6000/S and 32-core LS3C6000/D CPUs down to the last digit, which doesn't usually happen unless we are dealing with the very same silicon.
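The quoted throughput figures are internally consistent with the usual peak formula, GFLOPS = cores × clock (GHz) × FLOPs per cycle per core, with both chips coming out at exactly 24 FLOPs per cycle per core. That per-core figure is my inference from the published numbers, not something either vendor states:

```python
def peak_gflops(cores, ghz, flops_per_cycle):
    """Theoretical peak throughput: cores * clock * FLOPs issued per cycle."""
    return cores * ghz * flops_per_cycle

# Both spec sheets (Irtysh and Loongson alike) match exactly when each
# core issues 24 FLOPs per cycle -- more evidence it is the same silicon.
assert round(peak_gflops(16, 2.20, 24), 1) == 844.8   # C616 / LS3C6000/S
assert round(peak_gflops(32, 2.10, 24), 1) == 1612.8  # C632 / LS3C6000/D
```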
Indeed, Tramplin Electronics was first registered on April 4, 2025, so the company is less than a year old. It is impossible to develop a processor from scratch (even one based on a known/licensed ISA), find a production partner, complete its physical design, tape it out, and get samples in such a short time frame. In fact, a year is barely enough to bring up a new CPU based on an existing platform (this can easily take a couple of years even for a company of AMD's or Intel's size), let alone develop one from scratch. Given that the processors were made in the third week of 2026, it looks like these are regular Loongson LS3C6000 CPUs that simply carry Cyrillic inscriptions.
Now that Russia-based entities cannot legally obtain high-performance CPUs from companies like AMD or Intel, the only way for the country to retain access to more or less contemporary processors is to buy them illegally in nearby countries, or get Chinese processors from the People's Republic. Apparently, we are dealing with the second option here, albeit with an attempt to disguise Chinese processors as those developed in Russia. Interestingly, the source of the river Irtysh — after which the CPUs are named — is in China.
GFiber and Astound to merge with Alphabet selling majority stake to Stonepeak:
Google Fiber, now officially called GFiber, is being sold to private equity firm Stonepeak and will be combined with cable-and-fiber firm Astound Broadband to create a larger Internet service provider.
Google owner Alphabet announced Wednesday that it will keep only a minority stake in the fiber ISP that launched with grand ambitions in 2012 but scaled back its expansion plans in 2016. Alphabet and Astound owner Stonepeak announced "an agreement to combine GFiber with Astound Broadband, creating a leading independent fiber provider," with the merged company to be "majority owned by Stonepeak, an investment firm specializing in infrastructure and real assets."
The deal is subject to regulatory approvals and other closing conditions, with an expected closing date in Q4 of this year. The sale price was not disclosed. The deal will help GFiber take "a major step toward its goal of operational and financial independence" and obtain the "external capital and strategic focus needed to accelerate its next phase of growth," the announcement said.
[...] Astound is already the product of industry consolidation via a series of private equity deals that combined Wave Broadband, RCN, and Grande Communications. A research note from the New Street analyst firm said GFiber offers service at 2.8 million locations in 15 states, while Astound's service area has 4.45 million locations in 12 states and the District of Columbia. Most of Astound's network is cable broadband, but it has 892,014 fiber locations and 44,548 copper locations.
[...] The combined GFiber/Astound company will face competition in most of its territory from at least one cable or fiber/copper provider. That includes AT&T at 53 percent of locations, Comcast at 46 percent of locations, Charter at 43 percent of locations, Verizon at 22 percent of locations, and Lumen (CenturyLink) at 11 percent.
The reliable internet connections provided by Starlink offer a huge advantage on the battlefield. But as access is dependent on the whims of controversial billionaire Elon Musk, militaries are looking to build their own version:
Starlink’s satellite constellation provides a reliable internet connection to almost anywhere on Earth, conferring an advantage on the modern battlefield. But it is also run by controversial billionaire Elon Musk, presenting a risk to militaries that could easily find themselves cut off. So, now countries are racing to build their own version.
The Starlink network consists of almost 10,000 satellites that offer internet connections across most of the planet via small dishes on the ground. The company says it has more than 10 million paying civilian customers, but the service is also used militarily. Modern warfare is a data-intensive business, with intelligence, video feeds and drone control instructions being beamed back and forth 24 hours a day.
Unlike radios, which can be easily jammed by adversaries, Starlink’s signals point straight up from ground stations to space and are relatively robust. And because receivers are cheap, they can be issued to small military units and even used on remotely operated ground and aerial drones.
But in a world where global tensions are ratcheting up and states are seeking sovereignty in everything from computer chip manufacture to nuclear deterrence, relying on a foreign service like Starlink to coordinate troops is considered increasingly risky. Especially when it is controlled by a mercurial figure like Musk.
Both Ukraine and Russia have used Starlink since the 2022 invasion, with reports suggesting that Russia has guided attack drones with it. But in February, the company restricted access to registered users and effectively shut Russian troops out of the service. The move is reported to have had serious repercussions for Russia’s ability to coordinate its military and provided Ukraine an advantage, at least in the short term. No other nation wants to find itself in the same boat.
The European Union is building its own version called Infrastructure for Resilience, Interconnectivity and Security by Satellite (IRIS²), which will have around 300 satellites, but isn’t due to begin operating until 2030. China is also building the Guowang network, which will have 13,000 satellites, but currently has fewer than 200, and the Qianfan constellation, which is also still in the early stages of construction. Russia’s planned Sfera constellation has encountered delays.
Even European states are working to develop their own versions separate from the EU. Germany is in talks to create its own network, which is still on the drawing board, and the UK retains a stake in satellite internet provider Eutelsat OneWeb, having saved its precursor from bankruptcy because the technology was so important. A British start-up called OpenCosmos is also working on a similar system, ironically with backing from US intelligence agency the CIA.
Anthony King at the University of Exeter, UK, says it is “striking” that a private communications company can hold such a powerful position on the world stage today, able to allow or deny an advantage in future conflicts, but that affluent superpowers will catch up given time. “Of course, the Chinese will have one, and do have one [of current lesser size], so they will have secure satellite digital communications in any future conflict,” he says.
Although Starlink is a private company, Barry Evans at the University of Surrey, UK, says it was heavily funded for strategic reasons by the US government and even offers a more secure militarised version called Starshield.
“You’ve got governments relying on an individual, which is one of the things that worries Europe,” says Evans. “[Musk] turns it off in various countries at various times. There’s a lot happening and, for the UK, it’s quite worrying because we don’t have the funding, really, to launch our own system.”
The Iran war is going to be costly for the tech sector:
While Meta announced the completion of the core of the project, it’s still working on the Pearls section of the network, which was intended to connect Persian Gulf states, including Iraq, Kuwait, Saudi Arabia, Bahrain, Qatar, United Arab Emirates, and Oman, as well as Pakistan and India, to the rest of Africa and several European countries. The publication says that the bulk of the undersea cable has already been laid but is yet to be connected to the onshore landing stations.
Aside from the trouble in the Middle East, undersea cables in Europe and East Asia are constantly under threat of being cut by ships that are part of "shadow fleets": vessels with murky ownership, often indirectly controlled by states like Russia and China conducting hybrid warfare. Because of this, Meta has been planning Project Waterworth, a 50,000-km (30,000-mile) undersea cable that will bypass current geopolitical hotspots. But despite being announced in 2025, it's expected to take several more years before the cable is completed and goes online.
Chipmaker Nvidia is preparing to launch its own open source AI agent platform to compete with the likes of OpenClaw, according to a recent Wired report.
The magazine cites "people familiar with the company's plans" in reporting that Nvidia has been pitching the platform, which it is calling NemoClaw, to various corporate partners ahead of its annual developer conference next week. Salesforce, Cisco, Google, Adobe, and CrowdStrike are among the companies said to be in talks for those partnerships, though it's unclear what specific benefits those companies would receive for their association with the open source tool.
NemoClaw, as the somewhat awkward name suggests, would be a direct competitor of OpenClaw (previously known as Moltbot and Clawdbot), the system that attracted widespread attention in January for letting users direct "always-on" AI agents from their personal machines, using any number of underlying models. Last month, OpenAI hired OpenClaw creator Peter Steinberger "to drive the next generation of personal agents," as founder Sam Altman put it, though the OpenClaw project will be run by an independent foundation with OpenAI's support.
And that is not nearly enough:
Teenagers across the country are getting less sleep, a researcher from the University of Connecticut reports on March 2 in JAMA. And the problem appears to be societal.
Teens not getting enough sleep has been reported as a problem in the medical literature since at least the turn of the 20th century: a 1905 study in The Lancet of the sleep hours of boys in British boarding schools worried that they were not getting enough sleep due to nighttime lighting, and suggested that "late to bed and early to rise is neither physiological nor wise."
Later, in the 1950s, public concern focused on evening entertainment such as radio and television keeping teens awake too late. More recently, research has connected too little sleep with overstimulation, mental health problems, accidents, and academic challenges.
But teens are getting even less sleep than they used to, report UConn School of Medicine psychiatric epidemiologist T. Greg Rhee and colleagues in their latest look at the Youth Risk Behavior Survey conducted by the Centers for Disease Control and Prevention (CDC). The survey provides nationally representative data for examining long-term trends in teen risk behaviors.
Rhee and his colleagues' analysis of the survey data from 2007 to 2023 shows more than 50% of teens are reporting less than 5 hours of sleep a night in the most recent survey, more sleep deprived teens than in any previous survey. Less than five hours of sleep a night is considered very short sleep, and is associated with emotional regulation issues such as anxiety and depression, poor academic performance or neurocognitive development, and increased risks for obesity and diabetes.
Teens getting less than 5 hours of sleep a night increased across all subgroups in the most recent survey, whether they had risk factors such as depressive thoughts, controlled substance use, or large amounts of screen time, or no risk factors at all.
The number of teens getting sufficient sleep, defined as eight or more hours a night, dropped from more than 30% in 2007 to less than 25% in 2023.
"These trends highlight the need for population-level interventions among teens. For example, later school start times can help with longer sleep, which may lead to better mental health outcomes and greater academic engagement," said Rhee and his colleagues. More research is needed into which interventions might be effective on the population level. For example, Rhee suggests researchers examine whether reforming academic or extracurricular schedules to reduce evening demands could improve sleep health among teens.
Journal Reference: https://doi.org/10.1001/jama.2026.1417
Amazon Expands Health AI Access For Virtual Health Care
Health AI will be available to more US Amazon users this year.
Other functions Health AI can help with include requesting prescription refills through Amazon Pharmacy and connecting you with One Medical doctors via video, in-person appointments, or messages.
Health AI isn't the first chatbot to offer health guidance. In January, OpenAI introduced ChatGPT Health, which similarly answers health questions, deciphers lab results and connects to your Apple Health app and medical records with your permission. Anthropic likewise introduced Claude for Healthcare, which connects to Apple Health and can help you understand and sort through health care tasks such as medical bills. Apple is also rumored to be working on its own AI health coach or assistant.
There have been concerns about chatbots being used for sensitive topics such as health advice, because misguidance can cause harm. We would always caution against inputting private, sensitive information into AI bots, and advise taking their advice with a grain of salt, since AI is known to hallucinate. Double-check with your provider about any health care advice an AI chatbot gives you.
Amazon's virtual assistant is said to be HIPAA-compliant and intended only for medical support, not to replace a health care provider. Health AI will be able to answer questions such as: "Can you explain my recent cholesterol results and what they mean for me?" or "What allergy medications are safe with my current prescriptions?"
An Amazon representative didn't immediately respond to a request for further comment.
AI Chatbots Miss More Than Half Of Medical Diagnoses, Study Finds
https://www.cnet.com/health/medical/chatbots-miss-medical-diagnoses/
ChatGPT was one of the chatbots used in Nature Medicine's study.
The study acknowledged that LLMs now achieve scores on medical knowledge benchmarks comparable to passing the US Medical Licensing Exam, and that clinical documents from LLMs "are rated as equivalent to or better than those written by doctors."
However, a problem emerged when the study's participants tried to obtain the same results by asking the LLMs questions themselves: they were often unsuccessful because users didn't provide enough information, the study found. In 16 of 30 sampled interactions, initial messages contained only partial information.
"In two cases, LLMs provided initially correct responses but added new and incorrect responses after the users added additional details," the study said, suggesting that conversing more with the chatbots did not improve the probability of receiving a correct medical diagnosis.
After the initial diagnosis, the LLMs provided the correct follow-up steps to the person just 44.2% of the time.
Meta's Llama 3 was one of the large language models used in the study.
According to a survey by OpenAI, the maker of ChatGPT, 3 in 5 US adults report using AI for health. "They are using AI to get information when they first feel unwell, consulting it to prepare for their visits with their clinicians, and using it to better comprehend patient instructions and recommendations," OpenAI stated.
And although there's a small disclaimer on ChatGPT's website that reads, "ChatGPT can make mistakes. Check important info," many people do take the chatbot's word for fact.
The study serves as a reminder that ChatGPT and similar chatbots should not be relied upon for medical guidance, particularly in serious situations.
New study challenges notion that aging means decline, finds many older adults improve over time:
Aging in later life is often portrayed as a steady slide toward physical and cognitive decline. But a new study by scientists at Yale University suggests an alternate narrative — that older individuals can and do improve over time, and their mindset toward aging plays a major part in their success.
Analyzing more than a decade of data from a large, nationally representative study of older Americans, lead author Becca R. Levy, PhD, a professor of social and behavioral sciences at the Yale School of Public Health (YSPH), found that nearly half of adults aged 65 and older showed measurable improvement in cognitive function, physical function, or both over time.
The improvements were not limited to a small group of exceptional individuals and, notably, were linked to a powerful but often overlooked factor: how people think about aging itself.
"Many people equate aging with an inevitable and continuous loss of physical and cognitive abilities," said Dr. Levy, an international expert on psychosocial determinants of aging health. "What we found is that improvement in later life is not rare, it's common, and it should be included in our understanding of the aging process."
For the study, the researchers followed more than 11,000 participants in the Health and Retirement Study, a federally supported longitudinal survey of older Americans. The research team tracked changes in cognition using a global performance assessment, and physical function using walking speed — often described by geriatricians as a "vital sign" because of its strong links to disability, hospitalization, and mortality.
Over a follow-up period of up to 12 years, 45% of participants improved in at least one of the two domains, according to the study. About 32% improved cognitively, 28% improved physically, and many experienced gains that exceeded thresholds considered clinically meaningful. When participants whose cognitive scores remained stable over that period (rather than declining) were included, more than half defied the stereotype of inevitable deterioration in cognition.
"What's striking is that these gains disappear when you only look at averages," said Dr. Levy, author of the book "Breaking the Age Code: How Your Beliefs About Aging Determine How Long & How Well You Live."
"If you average everyone together, you see decline," Dr. Levy continued. "But when you look at individual trajectories, you uncover a very different story. A meaningful percentage of the older participants that we studied got better."
The authors also examined potential reasons for why some people improve and some do not. They hypothesized that an important factor could be participants' baseline age beliefs — or, specifically, whether they had assimilated more positive or more negative views about aging by the start of the study. In support of this hypothesis, they found that those with more positive age beliefs were significantly more likely to show improvements in both cognition and walking speed, even after accounting for factors such as age, sex, education, chronic disease, depression, and length of follow-up.
[...] The authors hope their findings will reverse the popular perception that continuous decline is inevitable and encourage policy makers to increase their support for preventive care, rehabilitation, and other health-promoting programs for older persons that draw on their potential resilience.
Journal Reference: https://doi.org/10.3390/geriatrics11020028
Every month most people receive a bill for water, gas, electricity, internet, or insurance, and in the future, if the CEO of OpenAI has anything to say about it, a monthly bill for AI use.
OpenAI CEO Sam Altman has sparked debate after suggesting artificial intelligence could soon be sold like electricity or water – with users paying for "intelligence" on a meter.
The ChatGPT creator made the bold prediction during an appearance at the BlackRock US Infrastructure summit in Washington this week.
"We see a future where intelligence is a utility, like electricity or water, and people will buy it from us on a meter."
"Fundamentally, our business and I think the business of every other model provider is going to look like selling tokens.
"They may come from bigger or smaller models, it makes them more or less expensive. They may use more or less reasoning, which also makes them more or less expensive.
"They may be running all the time in the background trying to help you out. They may run only when you need them if you want to pay less. They may work super hard and spend hundreds of millions of dollars on a single problem and that's really valuable."
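The per-token model Altman describes amounts to simple metered arithmetic: tokens consumed, multiplied by a rate that varies with model size. A minimal sketch of such a bill follows; the model names and the rates are invented for illustration and are not OpenAI's actual prices.

```python
# Hypothetical metered-billing sketch. The rates and model names below
# are made up for illustration; they are not real OpenAI prices.

RATES_PER_MILLION = {          # USD per 1 million tokens (illustrative)
    "small-model": {"input": 0.15, "output": 0.60},
    "large-model": {"input": 2.50, "output": 10.00},
}

def monthly_bill(usage):
    """usage: list of (model, input_tokens, output_tokens) records."""
    total = 0.0
    for model, tokens_in, tokens_out in usage:
        rate = RATES_PER_MILLION[model]
        total += tokens_in / 1e6 * rate["input"]
        total += tokens_out / 1e6 * rate["output"]
    return round(total, 2)

# A month mixing cheap everyday queries with a few hard problems
# sent to the bigger (more expensive) model:
bill = monthly_bill([
    ("small-model", 4_000_000, 1_000_000),
    ("large-model", 500_000, 200_000),
])
print(f"${bill}")
```

The structure mirrors Altman's description: bigger models and more reasoning mean more (or pricier) tokens, so the meter runs faster.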
The idea provoked a backlash from commentators, with one person posting on Twitter: "I don't need AI to live, dude. I've literally never seen a more delusional tech founder in my life."
Where do you stand on this? Would you sign up for monthly billing for AI use, or can you live without it?
For decades, scientists have tried to answer a simple question: why be honest when deception is possible? Whether it is a peacock's tail, a stag's roar, or a human's résumé, signals are a means of influencing others by transmitting information, and advantages can be gained by cheating, for example by exaggeration. But if lying pays, why does communication not collapse?
The dominant theory for honest signals has long been the handicap principle, which claims that signals are honest because they are costly to produce. It argues that a peacock's tail, for example, is an honest signal of a male's condition or quality to potential mates because it is so costly to produce. Only high-quality birds can afford such a handicap, wasting resources to grow it and thereby demonstrating their superb quality to females, whereas poor-quality males cannot afford such ornaments.
A new synthesis by Szabolcs Számadó, Dustin J. Penn and István Zachar (from the Budapest University of Technology and Economics, University of Veterinary Medicine Vienna and HUN-REN Centre for Ecological Research, respectively) challenges that logic. They argue that honesty does not depend on how costly or wasteful a signal is, but rather on the trade-offs between investments and benefits faced by signallers.
They explain that signals are not honest because they are costly; instead, honesty evolves when it is beneficial and deception is costly. Previous studies inspired by the handicap principle (refuted by the authors in the paper) misleadingly focused only on the costs of signalling. Yet biological functions, like signalling, cannot be understood in an evolutionary context without their benefits, which are often realized in the long run.
The new theory, called Signalling Trade-Off Theory, shifts the focus from absolute cost to choice in what to invest. In biology, every organism faces competing demands: investing more in one thing means having less for another. Time spent courting cannot be spent feeding; energy put into bright feathers cannot be used for immune defence. These are trade-offs. And these are also present in economic choices for humans. Crucially, they differ between individuals. A healthy, well-fed animal can afford different choices than a weak or starving one. According to several theoretical studies, signalling trade-offs and not absolute costs define whether deception or honesty evolves.
"Signals, in theory, can be absolutely cost-free in terms of immediate energy investment," explains István Zachar, one of the authors. "Honesty does not come from how much a signal harms you but from what kind of cost-benefit ratio you can realize with it." And this trade-off between investments and benefits is defined by the condition of the individual.
According to theory, honest signals arise when these trade-offs respect the true quality of the individual, i.e. are condition-dependent. High-quality individuals get more return from the same investment than low-quality ones. As a result, the best strategy for a strong individual is to signal more, while the best strategy for a weak individual is to signal less. "Both are behaving optimally," the author says, "but because their trade-offs are different, their signals end up revealing who they are." This is how honesty is defined.
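The condition-dependent logic described above can be illustrated with a toy numerical model. This is not the authors' actual formalism, just a minimal sketch under assumed payoff functions: the benefit of a signal of intensity s is linear in s, while the investment cost s²/q is scaled by the individual's quality q, so a high-quality individual pays less for the same signal.

```python
# Toy sketch (not the authors' model): condition-dependent trade-offs.
# Benefit of signal intensity s is b*s; investment cost is s**2 / q,
# so a higher-quality individual (larger q) pays less for the same
# signal. Each individual picks the s that maximizes its own payoff.

def payoff(s, q, b=1.0):
    """Net payoff: linear benefit minus quality-scaled quadratic cost."""
    return b * s - s**2 / q

def best_signal(q, b=1.0):
    """Brute-force the optimal signal intensity for quality q."""
    grid = [i / 100 for i in range(0, 201)]  # candidate s in [0, 2]
    return max(grid, key=lambda s: payoff(s, q, b))

strong = best_signal(q=2.0)   # analytically s* = b*q/2 = 1.0
weak = best_signal(q=0.5)     # analytically s* = b*q/2 = 0.25
print(strong, weak)
```

Both individuals behave optimally, yet the strong one's optimum is a higher signal than the weak one's, so signal intensity ends up revealing quality without any assumption that the signal must be wasteful.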
[...] Why does this matter beyond biology? Because the same logic applies to human communication, from advertising to cooperation based on reputation. We all operate under trade-offs (inherited or learnt) between short-term gains and long-term consequences. Signals are reliable when those trade-offs differ across people in ways that make bluffing unprofitable.
"The real question is not 'how costly is this signal?'" Zachar says. "It is 'what would it cost this person, in terms of what else they could have done, to fake it?'"
By reframing honesty in terms of trade-offs rather than waste, the new theory brings signalling back in line with a broader understanding of evolution: organisms are not rewarded for squandering resources, but for allocating them efficiently under constraints. In that light, honest communication is not a miracle. It is a natural outcome of living in a non-quantum biological world where every choice closes off another.
Journal Reference: Szabolcs Számadó, István Zachar, Dustin J Penn, A general signalling theory: why honest signals are explained by trade-offs rather than costs or handicaps, Journal of Evolutionary Biology, Volume 39, Issue 2, February 2026, Pages 171–189, https://doi.org/10.1093/jeb/voaf144
The Van Allen Probe A satellite spent seven years measuring radiation, and nearly 14 years in space in total.
Once the mission ended, NASA originally calculated that the probes would fall back to Earth sometime in 2032. The agency acknowledges, though, that it didn't account for the current solar maximum, a period of increased instability on the sun, which leads to more intense space weather events. NASA says the extra solar wind caused drag on the probe, causing it to descend faster than the initial calculations predicted.
Data from these probes is still used today to measure and predict the impact of solar winds and radiation on communications systems, navigation satellites, power grids and even astronauts in space. The radiation that the Van Allen Probes studied is also the same radiation responsible for all of those gorgeous auroras Earth has been getting lately.
NASA said most of the spacecraft likely burned up as it sped downward through the atmosphere, although some components may have survived.
The agency originally predicted the return would occur around 7:45 p.m. ET Tuesday, noting that it could take up to 24 hours for the event to happen. It was off by 11 hours.
Before the splashdown, NASA predicted a 1 in 4,200 chance of any wreckage landing somewhere that could cause human harm. The coordinates that the space agency gave Wednesday for the reentry point -- approximately 2 degrees south latitude and 255.3 degrees east longitude -- are just south of the Equator and west of South America, meaning well out over the ocean.
The probe's partner, Van Allen Probe B, is also scheduled to crash back to Earth, but it isn't expected to arrive until 2030 or later.